9 research outputs found

    Classification before regression for improving the accuracy of glucose quantification using absorption spectroscopy

    This work contributes to improving glucose quantification using near-infrared (NIR), mid-infrared (MIR), and combined NIR-MIR absorbance spectroscopy by classifying the spectral data prior to applying regression models. Both manual and automated classification are presented, based on three homogeneous classes defined according to the clinical glycaemic ranges (hypoglycaemia, euglycaemia, and hyperglycaemia). For the manual classification, partial least squares and principal component regressions are applied to each class separately and are shown to yield improved quantification compared with applying the same regression models to the whole dataset. For the automated classification, linear discriminant analysis coupled with principal component analysis is deployed, and regressions are again applied to each class separately. The results are shown to outperform those of regressions over the entire dataset.
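    The classify-then-regress idea can be sketched with scikit-learn as below; the glycaemic thresholds, component counts, and the use of PLS for every class are illustrative assumptions rather than the authors' exact configuration.

        # Sketch of a classify-then-regress pipeline for absorbance spectra.
        # Thresholds (mmol/L) and component counts are illustrative assumptions.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.cross_decomposition import PLSRegression

        def train_classify_then_regress(X, y, hypo=3.9, hyper=10.0, n_pcs=10, n_pls=8):
            """X: spectra (n_samples, n_wavelengths); y: reference glucose (mmol/L)."""
            # Label each spectrum with its clinical glycaemic range:
            # 0 = hypoglycaemia, 1 = euglycaemia, 2 = hyperglycaemia.
            labels = np.digitize(y, [hypo, hyper])

            # Automated classification: PCA for dimensionality reduction, then LDA.
            pca = PCA(n_components=n_pcs).fit(X)
            lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)

            # Separate PLS regression model per glycaemic class.
            models = {c: PLSRegression(n_components=n_pls).fit(X[labels == c], y[labels == c])
                      for c in np.unique(labels)}
            return pca, lda, models

        def predict(pca, lda, models, X_new):
            # Route each new spectrum to its predicted class, then regress within that class.
            classes = lda.predict(pca.transform(X_new))
            return np.array([models[c].predict(x[None, :])[0, 0] for c, x in zip(classes, X_new)])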

    A deep neural network application for improved prediction of HbA1c in type 1 diabetes

    HbA1c is a primary marker of long-term average blood glucose, which is an essential measure of successful control in type 1 diabetes. Previous studies have shown that HbA1c estimates can be obtained from 5-12 weeks of daily blood glucose measurements. However, these methods suffer from accuracy limitations when applied to incomplete data with missing periods of measurements. The aim of this work is to overcome these limitations by improving the accuracy and robustness of HbA1c prediction from time series of blood glucose. A novel data-driven HbA1c prediction model based on deep learning and convolutional neural networks is presented. The model focuses on the extraction of behavioural patterns from sequences of self-monitored blood glucose readings on various temporal scales. Assuming that subjects who share behavioural patterns also have similar capabilities for diabetes control and resulting HbA1c, it becomes possible to infer the HbA1c of subjects with incomplete data from multiple observations of similar behaviours. Trained and validated on a dataset containing 1543 real-world observation epochs from 759 subjects, the model achieved a mean absolute error of 4.80±0.62 mmol/mol, a median absolute error of 3.81±0.58 mmol/mol, and an R2 of 0.71±0.09 on average during 10-fold cross-validation. Automatic behavioural characterization via extraction of sequential features by the proposed convolutional neural network structure significantly improved the accuracy of HbA1c prediction compared with existing methods.
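    As a rough illustration of the kind of convolutional architecture described, the following PyTorch sketch maps a zero-filled sequence of self-monitored blood glucose readings to a single HbA1c estimate; the layer sizes, pooling choices, and input layout are assumptions, not the published model.

        # Illustrative 1D-CNN regressor for HbA1c from SMBG sequences (PyTorch).
        # Architecture details are assumptions, not the authors' published model.
        import torch
        import torch.nn as nn

        class SMBGToHbA1c(nn.Module):
            def __init__(self, n_channels=1):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
                    nn.MaxPool1d(4),                  # coarser temporal scale
                    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),          # summary of the behavioural pattern
                )
                self.head = nn.Linear(32, 1)          # HbA1c estimate in mmol/mol

            def forward(self, x):
                # x: (batch, 1, seq_len); missing readings zero-filled.
                return self.head(self.features(x).squeeze(-1))

        model = SMBGToHbA1c()
        hba1c = model(torch.randn(8, 1, 12 * 7 * 4))  # e.g. ~12 weeks, 4 readings/day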

    6G wireless communications networks: a comprehensive survey

    The commercial fifth-generation (5G) wireless communications networks have already been deployed with the aim of providing high data rates. However, the rapid growth in the number of smart devices and the emergence of Internet of Everything (IoE) applications, which require ultra-reliable and low-latency communication, will place a substantial burden on 5G wireless networks. As such, the data rate that 5G networks can supply is unlikely to sustain the enormous ongoing explosion in data traffic. This has motivated research into advancing existing wireless networks toward the future generation of cellular systems, known as the sixth generation (6G). It is therefore essential to provide a prospective vision of 6G and the key enabling technologies for realizing future networks. To this end, this paper presents a comprehensive review/survey of the future evolution of 6G networks. Specifically, it reviews the key enabling technologies for 6G networks, including the main operating principles of each technology, envisioned potential applications, the current state of the art, and the related technical challenges. Overall, this paper provides useful information for industrial and academic researchers and discusses potential new research directions.

    Intelligent data-driven model for diabetes diurnal patterns analysis

    In type 1 diabetes, diurnal activity routines are influential factors in insulin dose calculations. Bolus advisors have been developed to more accurately suggest doses of meal-related insulin based on carbohydrate intake, according to pre-set insulin-to-carbohydrate ratios and insulin sensitivity factors. These parameters can be varied according to the time of day, and their optimal setting relies on accurately identifying the daily time periods of routines. The main issues with reporting and adjusting daily activity routines are the reliance on self-reporting, which is prone to inaccuracy, and the retention of default daily time-period settings within bolus calculators, such as those in insulin pumps, glucose meters, and mobile applications. Moreover, daily routines are subject to change over time, which could go unnoticed. Hence, forgetting to change the daily time periods in the bolus calculator could contribute to sub-optimal self-management. In this paper, these issues are addressed by proposing a data-driven model for identification of diabetes diurnal patterns based on self-monitoring data. The model uses time-series clustering to achieve a meaningful separation of the patterns, which is then used to identify the daily time periods and to advise of any time changes required. Further improvements in bolus advisor settings are proposed, including week/weekend or even modifiable daily time settings. The proposed model provides a quick, granular, more accurate, and personalized daily time-setting profile while offering both patients and clinicians a more contextual perspective on glycaemic pattern identification.
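    A minimal sketch of the underlying idea, clustering days by their 24-hour self-monitoring profiles with scikit-learn k-means, is given below; the hourly binning, the simple imputation, and the choice of two patterns (e.g. weekday-like vs weekend-like) are assumptions rather than the paper's exact method.

        # Sketch: group days with similar diurnal self-monitoring routines.
        # Binning, imputation, and n_patterns=2 are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_daily_profiles(glucose, hour_of_day, day_index, n_patterns=2):
            """Build one 24-bin profile per day, then group days with similar routines."""
            days = np.unique(day_index)
            profiles = np.full((len(days), 24), np.nan)
            for i, d in enumerate(days):
                mask = day_index == d
                for h in range(24):
                    vals = glucose[mask & (hour_of_day == h)]
                    if vals.size:
                        profiles[i, h] = vals.mean()
            # Simple imputation of hours with no readings (column means).
            col_means = np.nanmean(profiles, axis=0)
            profiles = np.where(np.isnan(profiles), col_means, profiles)
            labels = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit_predict(profiles)
            return days, labels   # e.g. separates weekday-like from weekend-like routines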

    Blood glucose level prediction: advanced deep-ensemble learning approach

    Optimal and sustainable control of blood glucose levels (BGLs) is the aim of type-1 diabetes management. The automated prediction of BGLs using machine learning (ML) algorithms is considered a promising tool that can support this aim. In this context, this paper proposes new advanced ML architectures to predict BGLs, leveraging deep learning and ensemble learning. The deep-ensemble models are developed with novel meta-learning approaches, in which the feasibility of changing the dimension of a univariate time-series forecasting task is investigated. The models are evaluated using both regression and clinical metrics, and the performance of the proposed ensemble models is compared with benchmark non-ensemble models. The results show the superior performance of the developed ensemble models over the non-ensemble benchmarks and demonstrate the efficacy of the proposed meta-learning approaches.
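    The ensemble idea can be illustrated as a simple stacked (meta-learning) forecaster in which the base learners' predictions become the meta-learner's inputs; the window lengths, the base models, and the ridge meta-learner below are stand-ins, not the paper's architectures.

        # Sketch of a stacked (meta-learning) ensemble for univariate BGL forecasting.
        # Window sizes, base learners, and the ridge meta-learner are assumptions.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.neural_network import MLPRegressor

        def make_windows(cgm, history=12, horizon=6):
            """Univariate glucose series -> (X, y): `history` past samples, `horizon`-step-ahead target."""
            n = len(cgm) - history - horizon
            X = np.array([cgm[i:i + history] for i in range(n)])
            y = np.array([cgm[i + history + horizon] for i in range(n)])
            return X, y

        def fit_stacked_ensemble(cgm_train):
            X, y = make_windows(cgm_train)
            # Base learners (stand-ins for the deep forecasters).
            base = [MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=k).fit(X, y)
                    for k in range(3)]
            # Meta-learner combines the base forecasts.
            meta_X = np.column_stack([m.predict(X) for m in base])
            meta = Ridge().fit(meta_X, y)
            return base, meta

        def predict(base, meta, X_new):
            return meta.predict(np.column_stack([m.predict(X_new) for m in base]))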

    COVID-19 mortality risk assessments for individuals with and without diabetes mellitus: machine learning models integrated with interpretation framework

    This research develops machine learning models equipped with interpretation modules for mortality risk prediction and stratification in cohorts of hospitalised coronavirus disease-2019 (COVID-19) patients with and without diabetes mellitus (DM). To this end, routinely collected clinical data from 156 COVID-19 patients with DM and 349 COVID-19 patients without DM were scrutinised. First, a random forest classifier forecasted in-hospital COVID-19 fatality using admission data for each cohort. For the DM cohort, the model predicted mortality risk with an accuracy of 82%, an area under the receiver operating characteristic curve (AUC) of 80%, a sensitivity of 80%, and a specificity of 56%. For the non-DM cohort, the achieved accuracy, AUC, sensitivity, and specificity were 80%, 84%, 91%, and 56%, respectively. The models were then interpreted using SHapley Additive exPlanations (SHAP), which explained the predictors' global and local influences on model outputs. Finally, the k-means algorithm was applied to cluster patients by their SHAP values, demarcating the patients into three clusters. Average mortality rates within the generated clusters were 8%, 20%, and 76% for the DM cohort and 2.7%, 28%, and 41.9% for the non-DM cohort, providing a functional method of risk stratification.
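    The described pipeline (random forest, then SHAP, then k-means on the SHAP values) can be sketched as follows; the feature matrix, number of trees, and data handling are hypothetical, and the SHAP indexing is hedged across library versions.

        # Sketch: random forest mortality classifier, SHAP explanations,
        # then k-means on the SHAP values for risk stratification.
        # Inputs are hypothetical numpy arrays of admission features and outcomes.
        import numpy as np
        import shap
        from sklearn.cluster import KMeans
        from sklearn.ensemble import RandomForestClassifier

        def risk_stratify(X_admission, died_in_hospital, n_clusters=3):
            # 1. Random forest forecasts in-hospital COVID-19 fatality from admission data.
            rf = RandomForestClassifier(n_estimators=300, random_state=0)
            rf.fit(X_admission, died_in_hospital)

            # 2. SHAP explains each predictor's local contribution to every prediction.
            sv = shap.TreeExplainer(rf).shap_values(X_admission)
            if isinstance(sv, list):          # older shap: list of per-class arrays
                sv = sv[1]
            elif sv.ndim == 3:                # newer shap: (n_samples, n_features, n_classes)
                sv = sv[..., 1]

            # 3. k-means on the SHAP values demarcates patients into risk clusters.
            clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(sv)

            # Average observed mortality per cluster gives the stratification.
            rates = {c: float(died_in_hospital[clusters == c].mean()) for c in range(n_clusters)}
            return rf, clusters, rates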